The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task. This paper describes our system (alpha), which is ranked among the top in the Shared Task, on both the in-domain test set (obtaining a 74.9% accuracy) and the cross-domain test set (also attaining a 74.9% accuracy), demonstrating that the model generalizes well to cross-domain data. Our model is equipped with intra-sentence gated-attention composition, which helps achieve better performance. In addition to submitting our model to the Shared Task, we have also tested it on the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy of 85.5%, which is the best reported result on SNLI when cross-sentence attention is not allowed, the same condition enforced in RepEval 2017.